Abstract:
Programmable blockchains enable a new type of decentralized application (Dapp) that facilitates the transfer of assets between users without a third party. The popularity of Ethereum Dapps also brings great security risks: they have come under various kinds of attacks from cybercriminals seeking profit. Smart contracts, the back end of Dapps, have had their programming errors exploited to steal cryptocurrency or tokens. Multiple approaches have been proposed to detect unsafe contracts. This paper presents a Blockchain Safe Browsing (BSB) platform that effectively disseminates smart contract detection results to contract users and vulnerable contract owners. From those results, a contract blacklist is generated to provide a warning service that alerts users before they make transactions with unsafe contracts. Meanwhile, an owner-notification mechanism is developed to help contract owners study the vulnerability details of their contracts so that they can patch the vulnerabilities in time. Within this mechanism, researchers profit from the data they share, which in turn motivates them to keep uploading their research results. Moreover, since vulnerability exploit details are the researchers' most valuable asset, they are encrypted before uploading and can only be decrypted by contract owners, which prevents the details from being leaked and exploited by cybercriminals. Extensive evaluations using real datasets (with 2,880 unsafe contracts) demonstrate that our prototype functions as intended without sacrificing user experience, warning users at the millisecond level.
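The user-warning service described above reduces to a blacklist lookup before a transaction is submitted. The following minimal sketch models the blacklist as an in-memory dict; the addresses, vulnerability labels, and data model are illustrative assumptions, not the actual BSB design.

```python
# Hypothetical blacklist: contract address -> detected vulnerability class.
UNSAFE_CONTRACTS = {
    "0x0a1b": "reentrancy",
    "0x2c3d": "integer overflow",
}

def check_before_transaction(contract_address: str) -> str:
    """Warn the user if the target contract is on the blacklist."""
    flag = UNSAFE_CONTRACTS.get(contract_address.lower())
    if flag is not None:
        return f"WARNING: contract flagged for {flag}"
    return "OK"
```

A constant-time dict lookup like this is consistent with the millisecond-level warning latency the abstract reports.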
Abstract:
Facial makeup transfer aims to render a non-makeup face image in the style of an arbitrary given makeup image while preserving face identity. The most advanced methods separate makeup style information from face images to realize the transfer. However, makeup style comprises several semantically clear local styles that remain entangled with one another. In this paper, we propose a novel unified adversarial disentangling network that further decomposes face images into four independent components: personal identity, lip makeup style, eye makeup style, and face makeup style. Owing to this disentangled makeup representation, our method can not only flexibly control the degree of local makeup styles but also transfer local makeup styles from different images into the final result, which no other approach can handle. For makeup removal, unlike other methods that regard removal as the reverse process of transfer, we integrate makeup transfer and makeup removal into one uniform framework and obtain multiple removal results. Extensive experiments demonstrate that our approach produces visually pleasing and accurate makeup transfer results compared to the state-of-the-art methods.
Abstract:
In this paper, we propose an approach to the automatic restoration of Arabic diacritics that stacks three components in a pipeline: a deep learning model, namely a multi-layer recurrent neural network with LSTM and Dense layers; a character-level rule-based corrector that applies deterministic operations to prevent some errors; and a word-level statistical corrector that uses context and edit-distance information to fix some diacritization issues. The approach is novel in that it combines methods of different types and adds edit-distance-based corrections. We trained and tested our system on a large public dataset of raw diacritized Arabic text (Tashkeela) after cleaning and normalizing it. On a newly released benchmark test set, our system outperformed all the tested systems, achieving a DER of 3.39% and a WER of 9.94% when taking all Arabic letters into account, and a DER of 2.61% and a WER of 5.83% when ignoring the diacritization of the last letter of every word.
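The word-level corrector's core idea, replacing an out-of-lexicon output with the nearest lexicon entry by edit distance, can be sketched as follows. The lexicon contents and the distance cutoff are illustrative assumptions, not the paper's exact configuration.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct_word(word: str, lexicon: set, max_dist: int = 2) -> str:
    """Keep in-lexicon words; otherwise snap to the closest entry if near enough."""
    if word in lexicon:
        return word
    best = min(lexicon, key=lambda w: levenshtein(word, w))
    return best if levenshtein(word, best) <= max_dist else word
```

In the real system the lexicon would hold diacritized Arabic word forms, and the paper additionally exploits context when choosing among candidates.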
Abstract:
In this paper, we analyse how double auction marketplaces set an effective pricing policy to determine transaction prices for matched buyers and sellers. We analyse this problem under continuous, privately known trader types, and consider two typical pricing policies: the equilibrium k pricing policy and the discriminatory k pricing policy. We first investigate how to set transaction prices to reach maximal allocative efficiency in an isolated marketplace when traders adopt Bayes-Nash equilibrium bidding strategies. We find that when the marketplace adopts the discriminatory k pricing policy, maximal allocative efficiency is reached by setting k = 0.41 or 0.59, and that the equilibrium k pricing policy provides higher allocative efficiency than the discriminatory k pricing policy. We further discuss how different pricing policies affect traders' Bayes-Nash equilibrium bidding strategies. We then extend the analysis to a setting with two marketplaces competing against each other to attract traders. We find that the marketplace using the equilibrium k pricing policy is more likely to beat the one using the discriminatory k pricing policy: in Bayes-Nash equilibrium, all traders converge to the marketplace using the equilibrium k pricing policy. Our analysis provides meaningful insights for designing an effective pricing policy.
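Under a discriminatory k pricing policy, each matched buyer-seller pair trades at a price interpolated between the seller's ask and the buyer's bid. The sketch below uses the common convention price = k*bid + (1-k)*ask; this convention is an assumption here, as the paper may place k on the other side of the interpolation.

```python
def discriminatory_k_price(bid: float, ask: float, k: float) -> float:
    """Transaction price for one matched pair; a valid match needs bid >= ask."""
    assert 0.0 <= k <= 1.0 and bid >= ask
    return k * bid + (1.0 - k) * ask

# With the efficiency-maximizing k = 0.41 reported in the abstract:
# discriminatory_k_price(bid=10.0, ask=6.0, k=0.41) = 6.0 + 0.41 * 4.0 = 7.64
```

At k = 0 the seller's ask is paid and at k = 1 the buyer's bid, so k governs how the trade surplus is split between the two sides.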
Abstract:
Existing entity alignment models mainly use the topological structure of the original knowledge graph and have achieved promising performance. However, they are still challenged by heterogeneous topological neighborhood structures, which can cause the models to produce different representations for counterpart entities. In this paper, we propose a global entity alignment model with gated latent space neighborhood aggregation (LatsEA) to address this challenge. The latent space neighborhood is formed by computing similarities between entity embeddings; it introduces long-range neighbors that expand the topological neighborhood and reconcile the heterogeneous neighborhood structures. The model uses a vanilla GCN to aggregate the topological neighborhood and the latent space neighborhood respectively, and then an average gating mechanism to combine the topological and latent space neighborhood information of the central entity. To further account for the interdependence between entity alignment decisions, we propose a global entity alignment strategy: we formulate entity alignment as a maximum bipartite matching problem, which is solved effectively by the Hungarian algorithm. Our experiments with ablation studies on three real-world entity alignment datasets prove the effectiveness of the proposed model: latent space neighborhood information and global alignment decisions both contribute to the performance improvement.
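The global alignment step treats entity alignment as maximum bipartite matching over a similarity matrix between source and target entity embeddings. In the toy sketch below, an exhaustive search over permutations stands in for the Hungarian algorithm (which solves the same problem in O(n^3)); the 3x3 similarity matrix is illustrative.

```python
from itertools import permutations

# sim[i][j]: similarity between source entity i and target entity j
sim = [
    [0.9, 0.1, 0.2],
    [0.3, 0.8, 0.1],
    [0.2, 0.4, 0.7],
]

def best_matching(sim):
    """Return the target permutation maximizing total matched similarity."""
    n = len(sim)
    return max(permutations(range(n)),
               key=lambda p: sum(sim[i][p[i]] for i in range(n)))

assignment = best_matching(sim)  # assignment[i] = matched target for source i
```

A one-to-one matching like this enforces the interdependence between decisions that independent nearest-neighbor alignment ignores: no two source entities can claim the same target.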
Abstract:
This paper proposes a novel multi-scale descriptor for shape recognition. The contour of a shape is represented by a sequence of uniformly spaced sample points. Straight lines connecting two moving contour points are used to cut the shape, and the lengths of the contour segments between the two sampled points determine the scale levels. The geometric features of the cut contour and the interior texture features around the straight lines are then extracted at each scale. This method not only has powerful discriminability, describing a shape from coarse to fine, but is also invariant to scale, rotation, translation, and mirror transformations. Experiments conducted on five image datasets (COIL-20, Flavia, Swedish, Leaf100 and ETH-80) demonstrate that the proposed method significantly outperforms the state-of-the-art methods.
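The first step of such a descriptor, resampling a closed contour into uniformly spaced points by arc length, can be sketched as follows. The polygonal contour representation and the sampling count are illustrative assumptions; the paper's actual sampling density is not specified here.

```python
import math

def resample_contour(points, n):
    """Return n points evenly spaced by arc length along a closed polyline."""
    segs = [(points[i], points[(i + 1) % len(points)]) for i in range(len(points))]
    lengths = [math.dist(a, b) for a, b in segs]
    total = sum(lengths)
    out, acc, i = [], 0.0, 0
    for k in range(n):
        target = total * k / n          # arc-length position of sample k
        while acc + lengths[i] < target:
            acc += lengths[i]           # advance to the segment containing it
            i += 1
        (ax, ay), (bx, by) = segs[i]
        t = (target - acc) / lengths[i]  # linear interpolation within segment
        out.append((ax + t * (bx - ax), ay + t * (by - ay)))
    return out
```

The cutting lines and features would then be computed between pairs of these resampled points, with the contour-segment length between them setting the scale.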